Game theory is the study of strategic decision making. More formally, it is "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers."[1] An alternative term suggested "as a more descriptive name for the discipline" is interactive decision theory.[2] Game theory is mainly used in economics, political science, and psychology, as well as logic and biology. The subject first addressed zero-sum games, in which one person's gains exactly equal the net losses of the other participant(s). Today, however, game theory applies to a wide range of behavioral relations, and has developed into an umbrella term for the logical side of decision science, covering both human and non-human actors (such as computers). Classic uses include finding a sense of balance in numerous games, in which each player has settled on a strategy that cannot improve his results, given the strategies of the others.
Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by his 1944 book Theory of Games and Economic Behavior, with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.
This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. Eight game-theorists have won the Nobel Memorial Prize in Economic Sciences, and John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology.
Early discussions of examples of two-person games occurred long before the rise of modern, mathematical game theory. The first known discussion of game theory occurred in a letter written by James Waldegrave in 1713. In this letter, Waldegrave provided a minimax mixed strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation.[3][4] In his 1838 Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth), Antoine Augustin Cournot considered a duopoly and presented a solution that is a restricted version of the Nash equilibrium.
The Danish mathematician Zeuthen proved that a mathematical model has a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games only when the pay-off matrix was symmetric. Borel conjectured that mixed-strategy equilibria would not exist in two-person zero-sum games, a conjecture that was later proved false.
Game theory did not really exist as a unique field until John von Neumann published a paper in 1928.[5] Von Neumann's work in game theory culminated in the 1944 book Theory of Games and Economic Behavior, written with Oskar Morgenstern. This foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. During this time period, work on game theory was primarily focused on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies.[6]
In 1950, the first discussion of the prisoner's dilemma appeared, and an experiment was undertaken on this game at the RAND Corporation. Around this same time, John Nash developed a criterion for mutual consistency of players' strategies, known as Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. This equilibrium is sufficiently general to allow for the analysis of non-cooperative games in addition to cooperative ones.
Game theory experienced a flurry of activity in the 1950s, during which time the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. In addition, the first applications of game theory to philosophy and political science occurred during this time.
In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium (later he would introduce trembling hand perfection as well). In 1967, John Harsanyi developed the concepts of incomplete information and Bayesian games. Nash, Selten and Harsanyi became Economics Nobel Laureates in 1994 for their contributions to economic game theory.
In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection, and common knowledge[7] were introduced and analyzed.
In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed more to the equilibrium school, introducing an equilibrium coarsening, correlated equilibrium, and developing an extensive formal analysis of the assumption of common knowledge and of its consequences.
In 2007, Leonid Hurwicz, together with Eric Maskin and Roger Myerson, was awarded the Nobel Prize in Economics "for having laid the foundations of mechanism design theory." Myerson's contributions include the notion of proper equilibrium, and an important graduate text: Game Theory, Analysis of Conflict (Myerson 1997). Hurwicz introduced and formalized the concept of incentive compatibility.
The games studied in game theory are well-defined mathematical objects. A game consists of a set of players, a set of moves (or strategies) available to those players, and a specification of payoffs for each combination of strategies. Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define noncooperative games.
The extensive form can be used to formalize games with a time sequencing of moves. Games here are played on trees (as pictured to the left). Here each vertex (or node) represents a point of choice for a player. The player is specified by a number listed by the vertex. The lines out of the vertex represent a possible action for that player. The payoffs are specified at the bottom of the tree. The extensive form can be viewed as a multi-player generalization of a decision tree. (Fudenberg & Tirole 1991, p. 67)
In the game pictured to the left, there are two players. Player 1 moves first and chooses either F or U. Player 2 sees Player 1's move and then chooses A or R. Suppose that Player 1 chooses U and then Player 2 chooses A; then Player 1 gets 8 and Player 2 gets 2.
The extensive form can also capture simultaneous-move games and games with imperfect information. To represent it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e., the players do not know at which point they are), or a closed line is drawn around them. (See example in the imperfect information section.)
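Returning to the sequential example above, the following Python fragment is a minimal sketch of how an extensive-form game can be encoded as a tree and solved by backward induction. Only the payoff pair (8, 2) for the path U followed by A comes from the example; the remaining payoffs and the structure of the F branch are placeholders invented purely for illustration.

```python
# A decision node is a pair (player, {action: subtree}); a terminal node is a payoff tuple.
# Only the (8, 2) payoff after U then A is from the example; the rest are hypothetical.
game = (1, {
    "U": (2, {"A": (8, 2), "R": (0, 0)}),
    "F": (2, {"A": (3, 1), "R": (1, 1)}),
})

def backward_induction(node):
    """Return (payoffs, path) assuming each player maximizes her own payoff,
    anticipating optimal play further down the tree."""
    if isinstance(node[1], dict):                # decision node
        player, actions = node
        best = None
        for action, subtree in actions.items():
            payoffs, path = backward_induction(subtree)
            if best is None or payoffs[player - 1] > best[0][player - 1]:
                best = (payoffs, [(player, action)] + path)
        return best
    return node, []                              # terminal payoff tuple

print(backward_induction(game))   # ((8, 2), [(1, 'U'), (2, 'A')])
```

Under these placeholder payoffs, backward induction selects the path in which Player 1 chooses U and Player 2 then chooses A.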
|                       | Player 2 chooses Left | Player 2 chooses Right |
| Player 1 chooses Up   | 4, 3                  | –1, –1                 |
| Player 1 chooses Down | 0, 0                  | 3, 4                   |
Normal form or payoff matrix of a 2-player, 2-strategy game
The normal (or strategic) form game is usually represented by a matrix which shows the players, strategies, and pay-offs (see the example to the right). More generally it can be represented by any function that associates a payoff for each player with every possible combination of actions. In the accompanying example there are two players; one chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number of rows and the number of columns. The payoffs are provided in the interior. The first number is the payoff received by the row player (Player 1 in our example); the second is the payoff for the column player (Player 2 in our example). Suppose that Player 1 plays Up and that Player 2 plays Left. Then Player 1 gets a payoff of 4, and Player 2 gets 3.
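As a minimal sketch, the payoff matrix above can be encoded as a map from strategy profiles to payoff pairs; the dictionary layout and names below are my own.

```python
# The 2x2 normal-form game above, as a map from strategy profiles to payoffs.
# The first entry of each pair is Player 1's (row) payoff, the second Player 2's (column).
payoffs = {
    ("Up", "Left"):    (4, 3),
    ("Up", "Right"):   (-1, -1),
    ("Down", "Left"):  (0, 0),
    ("Down", "Right"): (3, 4),
}

# If Player 1 plays Up and Player 2 plays Left, Player 1 gets 4 and Player 2 gets 3.
print(payoffs[("Up", "Left")])   # (4, 3)
```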
When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form.
Every extensive-form game has an equivalent normal-form game, however the transformation to normal form may result in an exponential blowup in the size of the representation, making it computationally impractical. (Leyton-Brown & Shoham 2008, p. 35)
In games that possess transferable utility, separate rewards are not given to individual players; rather, the characteristic function determines the payoff of each coalition. The convention is that the empty coalition receives a payoff of zero.
The origin of this form is to be found in John von Neumann and Oskar Morgenstern's book; they assumed that when a coalition C forms, it plays against the complementary coalition N \ C as if two individuals were playing a normal game, and the equilibrium payoff of C is taken as its characteristic value. Although there are different models for deriving coalitional values from normal-form games, not every game in characteristic function form can be derived from one.
Formally, a characteristic function form game is a pair (N, v), where N represents the set of players and v : 2^N → R is the characteristic function, assigning a worth to each coalition.
Such characteristic functions have been extended to describe games where there is no transferable utility.
The characteristic function form ignores the possible externalities of coalition formation. In the partition function form the payoff of a coalition depends not only on its members, but also on the way the rest of the players are partitioned (Thrall & Lucas 1963).
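As a concrete, purely hypothetical illustration of the characteristic function form, the sketch below encodes a three-player coalitional game as a map from coalitions to worths and computes the Shapley value mentioned earlier in the history section. All coalition values are invented for illustration.

```python
from itertools import permutations

# A hypothetical 3-player characteristic function v : 2^N -> R, keyed by frozensets.
# The coalition worths are invented; the empty coalition is worth 0 by convention.
N = {1, 2, 3}
v = {
    frozenset():          0,
    frozenset({1}):       1, frozenset({2}):    1, frozenset({3}):    2,
    frozenset({1, 2}):    4, frozenset({1, 3}): 5, frozenset({2, 3}): 6,
    frozenset({1, 2, 3}): 10,
}

def shapley_value(N, v):
    """Average each player's marginal contribution over all orderings of the players."""
    values = {i: 0.0 for i in N}
    orders = list(permutations(N))
    for order in orders:
        coalition = frozenset()
        for player in order:
            values[player] += v[coalition | {player}] - v[coalition]
            coalition = coalition | {player}
    return {i: total / len(orders) for i, total in values.items()}

print(shapley_value(N, v))   # the values sum to v(N) = 10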
As a method of applied mathematics, game theory has been used to study a wide variety of human and animal behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including behaviors of firms, markets, and consumers. The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviors as well.
Game-theoretic analysis was initially used to study animal behavior by Ronald Fisher in the 1930s (although even Charles Darwin makes a few informal game-theoretic statements). This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his book Evolution and the Theory of Games.
In addition to being used to describe, predict, and explain behavior, game theory has also been used to develop theories of ethical or normative behavior and to prescribe such behavior.[8] In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game-theoretic arguments of this type can be found as far back as Plato.[9]
The first known use is to describe and model how human populations behave. Some scholars believe that by finding the equilibria of games they can predict how actual human populations will behave when confronted with situations analogous to the game being studied. This particular view of game theory has come under recent criticism. First, it is criticized because the assumptions made by game theorists are often violated. Game theorists may assume players always act in a way to directly maximize their wins (the Homo economicus model), but in practice human behavior often deviates from this model. Explanations of this phenomenon are many: irrationality, new models of deliberation, or even different motives (like that of altruism). Game theorists respond by comparing their assumptions to those used in physics. Thus while their assumptions do not always hold, they can treat game theory as a reasonable scientific ideal akin to the models used by physicists. However, additional criticism of this use of game theory has been levied because some experiments have demonstrated that individuals do not play equilibrium strategies. For instance, in the centipede game, the guess 2/3 of the average game, and the dictator game, people regularly do not play Nash equilibria. There is an ongoing debate regarding the importance of these experiments.[10]
Alternatively, some authors claim that Nash equilibria do not provide predictions for human populations, but rather provide an explanation for why populations that play Nash equilibria remain in that state. However, the question of how populations reach those points remains open.
Some game theorists have turned to evolutionary game theory in order to resolve these issues. These models presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes both biological as well as cultural evolution and also models of individual learning (for example, fictitious play dynamics).
|           | Cooperate | Defect |
| Cooperate | -1, -1    | -10, 0 |
| Defect    | 0, -10    | -5, -5 |
The Prisoner's Dilemma
On the other hand, some scholars see game theory not as a predictive tool for the behavior of human beings, but as a suggestion for how people ought to behave. Since a Nash equilibrium of a game constitutes one's best response to the actions of the other players, playing a strategy that is part of a Nash equilibrium seems appropriate. However, this use for game theory has also come under criticism. First, in some cases it is appropriate to play a non-equilibrium strategy if one expects others to play non-equilibrium strategies as well. For an example, see Guess 2/3 of the average.
Second, the Prisoner's dilemma presents another potential counterexample. In the Prisoner's Dilemma, each player pursuing his own self-interest leads both players to be worse off than had they not pursued their own self-interests.
Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents.[11] Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, fair division, duopolies, oligopolies, social network formation, agent-based computational economics,[12] general equilibrium, mechanism design,[13] and voting systems,[14] and across such broad areas as experimental economics,[15] behavioral economics,[16] information economics,[17] industrial organization,[18] and political economy.[19][20]
This research usually focuses on particular sets of strategies known as equilibria in games. These "solution concepts" are usually based on what is required by norms of rationality. In non-cooperative games, the most famous of these is the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other strategies. So, if all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to deviate, since their strategy is the best they can do given what others are doing.
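A minimal sketch of this idea in Python: enumerate the pure strategy profiles of a finite game and keep those from which no player can gain by a unilateral deviation. It is applied here to the Prisoner's Dilemma payoffs from the table above; the function and variable names are my own.

```python
from itertools import product

def pure_nash_equilibria(strategies, payoff):
    """Return all strategy profiles from which no single player can gain by deviating.
    `strategies` is a list of strategy sets (one per player); `payoff(profile)`
    returns a tuple of payoffs, one per player."""
    equilibria = []
    for profile in product(*strategies):
        if all(
            payoff(profile)[i] >= payoff(profile[:i] + (s,) + profile[i + 1:])[i]
            for i in range(len(strategies))
            for s in strategies[i]
        ):
            equilibria.append(profile)
    return equilibria

# The Prisoner's Dilemma payoffs from the table above.
pd = {
    ("Cooperate", "Cooperate"): (-1, -1), ("Cooperate", "Defect"): (-10, 0),
    ("Defect", "Cooperate"):    (0, -10), ("Defect", "Defect"):    (-5, -5),
}
print(pure_nash_equilibria([["Cooperate", "Defect"]] * 2, lambda p: pd[p]))
# [('Defect', 'Defect')]
```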
The payoffs of the game are generally taken to represent the utility of individual players. Often in modeling situations the payoffs represent money, which presumably corresponds to an individual's utility. This assumption, however, can be faulty.
A prototypical paper on game theory in economics begins by presenting a game that is an abstraction of some particular economic situation. One or more solution concepts are chosen, and the author demonstrates which strategy sets in the presented game are equilibria of the appropriate type. Naturally one might wonder to what use this information should be put. Economists and business professors suggest two primary uses (noted above): descriptive and prescriptive.[8]
The application of game theory to political science is focused in the overlapping areas of fair division, political economy, public choice, war bargaining, positive political theory, and social choice theory. In each of these areas, researchers have developed game-theoretic models in which the players are often voters, states, special interest groups, and politicians.
For early examples of game theory applied to political science, see the work of Anthony Downs. In his book An Economic Theory of Democracy (Downs 1957), he applies the Hotelling firm location model to the political process. In the Downsian model, political candidates commit to ideologies on a one-dimensional policy space. Downs first shows how the political candidates will converge to the ideology preferred by the median voter if voters are fully informed, but then argues that voters choose to remain rationally ignorant which allows for candidate divergence.
A game-theoretic explanation for democratic peace is that public and open debate in democracies sends clear and reliable information regarding their intentions to other states. In contrast, it is difficult to know the intentions of nondemocratic leaders, what effect concessions will have, and whether promises will be kept. Thus there will be mistrust and unwillingness to make concessions if at least one of the parties in a dispute is a non-democracy (Levy & Razin 2003).
|      | Hawk   | Dove   |
| Hawk | 20, 20 | 80, 40 |
| Dove | 40, 80 | 60, 60 |
The hawk-dove game
Unlike in economics, the payoffs for games in biology are often interpreted as corresponding to fitness. In addition, the focus has been less on equilibria that correspond to a notion of rationality and more on ones that would be maintained by evolutionary forces. The best known equilibrium in biology is the evolutionarily stable strategy (ESS), first introduced in (Maynard Smith & Price 1973). Although its initial motivation did not involve any of the mental requirements of the Nash equilibrium, every ESS is a Nash equilibrium.
In biology, game theory has been used to understand many different phenomena. It was first used to explain the evolution (and stability) of the approximate 1:1 sex ratio. Fisher (1930) suggested that 1:1 sex ratios are a result of evolutionary forces acting on individuals who could be seen as trying to maximize their number of grandchildren.
Additionally, biologists have used evolutionary game theory and the ESS to explain the emergence of animal communication (Harper & Maynard Smith 2003). The analysis of signaling games and other communication games has provided some insight into the evolution of communication among animals. For example, the mobbing behavior of many species, in which a large number of prey animals attack a larger predator, seems to be an example of spontaneous emergent organization. Ants have also been shown to exhibit feed-forward behavior akin to fashion, see Butterfly Economics.
Biologists have used the game of chicken to analyze fighting behavior and territoriality.[citation needed]
Maynard Smith, in the preface to Evolution and the Theory of Games, writes, "paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behaviour for which it was originally designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature.[21]
One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself. This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, to Vervet monkeys that warn group members of a predator's approach, even when it endangers that individual's chance of survival.[22] All of these actions increase the overall fitness of a group, but occur at a cost to the individual.
Evolutionary game theory explains this altruism with the idea of kin selection. Altruists discriminate between the individuals they help and favor relatives. Hamilton's rule explains the evolutionary reasoning behind this selection with the inequality c < b × r, where the cost (c) to the altruist must be less than the benefit (b) to the recipient multiplied by the coefficient of relatedness (r). The more closely related two organisms are, the higher the incidence of altruism, because they share many of the same alleles. This means that the altruistic individual, by ensuring that the alleles of its close relative are passed on through the survival of its offspring, can forgo the option of having offspring itself, because the same number of alleles is passed on. Helping a sibling, for example (in diploid animals), has a coefficient of ½, because (on average) an individual shares half of the alleles in its sibling's offspring. Ensuring that enough of a sibling's offspring survive to adulthood precludes the necessity of the altruistic individual producing offspring.[22] The coefficient values depend heavily on the scope of the playing field: for example, if the choice of whom to favor includes all genetic living things, not just all relatives, and the genetic discrepancy between all humans is assumed to account for only approximately 1% of the diversity in the playing field, then a coefficient that was ½ in the smaller field becomes 0.995. Similarly, if information other than that of a genetic nature (e.g. epigenetics, religion, science, etc.) is considered to persist through time, the playing field becomes larger still, and the discrepancies smaller.
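As a small illustration of Hamilton's rule, the snippet below checks the inequality c < b × r for a few figures. The coefficients of relatedness (½ for a full sibling, ⅛ for a cousin in diploid organisms) are standard values, but the costs and benefits are invented for illustration and are not taken from the text.

```python
def hamilton_favors_altruism(cost, benefit, relatedness):
    """Hamilton's rule: altruism is favored when c < b * r."""
    return cost < benefit * relatedness

# Illustrative (invented) numbers: helping a full sibling (r = 0.5) at a cost of 1
# offspring-equivalent in exchange for a benefit of 3 is favored, while the same
# act toward a cousin (r = 0.125) is not.
print(hamilton_favors_altruism(1, 3, 0.5))    # True:  1 < 1.5
print(hamilton_favors_altruism(1, 3, 0.125))  # False: 1 >= 0.375
```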
Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics. In addition, computer scientists have used games to model interactive computations. Also, game theory provides a theoretical basis to the field of multi-agent systems.
Separately, game theory has played a role in online algorithms; in particular, the k-server problem, which has in the past been referred to as games with moving costs and request-answer games (Ben David, Borodin & Karp et al. 1994). Yao's principle is a game-theoretic technique for proving lower bounds on the computational complexity of randomized algorithms, especially online algorithms.
The emergence of the internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets. Algorithmic game theory[23] and within it algorithmic mechanism design[24] combine computational algorithm design and analysis of complex systems with economic theory.[25]
|      | Stag | Hare |
| Stag | 3, 3 | 0, 2 |
| Hare | 2, 0 | 2, 2 |
Stag hunt
Game theory has been put to several uses in philosophy. Responding to two papers by W.V.O. Quine (1960, 1967), Lewis (1969) used game theory to develop a philosophical account of convention. In so doing, he provided the first analysis of common knowledge and employed it in analyzing play in coordination games. In addition, he first suggested that one can understand meaning in terms of signaling games. This later suggestion has been pursued by several philosophers since Lewis (Skyrms (1996), Grim, Kokalis, Alai-Tafti et al. (2004)). Following Lewis's (1969) game-theoretic account of conventions, Ullmann Margalit (1977) and Bicchieri (2006) have developed theories of social norms that define them as Nash equilibria that result from transforming a mixed-motive game into a coordination game.[26]
Game theory has also challenged philosophers to think in terms of interactive epistemology: what it means for a collective to have common beliefs or knowledge, and what are the consequences of this knowledge for the social outcomes resulting from agents' interactions. Philosophers who have worked in this area include Bicchieri (1989, 1993),[27] Skyrms (1990),[28] and Stalnaker (1999).[29]
In ethics, some authors have attempted to pursue the project, begun by Thomas Hobbes, of deriving morality from self-interest. Since games like the Prisoner's dilemma present an apparent conflict between morality and self-interest, explaining why cooperation is required by self-interest is an important component of this project. This general strategy is a component of the general social contract view in political philosophy (for examples, see Gauthier (1986) and Kavka (1986)).[30]
Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes about morality and corresponding animal behaviors. These authors look at several games including the Prisoner's dilemma, Stag hunt, and the Nash bargaining game as providing an explanation for the emergence of attitudes about morality (see, e.g., Skyrms (1996, 2004) and Sober and Wilson (1999)).
Some assumptions used in some parts of game theory have been challenged in philosophy; for example, psychological egoism, the claim that rationality reduces to self-interest, is debated among philosophers (see Psychological egoism#Criticisms).
A game is cooperative if the players are able to form binding commitments. For instance, the legal system requires them to adhere to their promises. In noncooperative games this is not possible.
Often it is assumed that communication among players is allowed in cooperative games, but not in noncooperative ones. However, this classification on two binary criteria has been questioned, and sometimes rejected (Harsanyi 1974).
Of the two types of games, noncooperative games are able to model situations to the finest details, producing accurate results. Cooperative games focus on the game at large. Considerable efforts have been made to link the two approaches. The so-called Nash programme, which seeks to provide noncooperative foundations for cooperative solution concepts, has already established many of the cooperative solutions as noncooperative equilibria.
Hybrid games contain cooperative and non-cooperative elements. For instance, coalitions of players are formed in a cooperative game, but these play in a non-cooperative fashion.
|   | E    | F    |
| E | 1, 2 | 0, 0 |
| F | 0, 0 | 1, 2 |
An asymmetric game
Main article: Symmetric game
A symmetric game is a game where the payoffs for playing a particular strategy depend only on the other strategies employed, not on who is playing them. If the identities of the players can be changed without changing the payoff to the strategies, then a game is symmetric. Many of the commonly studied 2×2 games are symmetric. The standard representations of chicken, the prisoner's dilemma, and the stag hunt are all symmetric games. Some scholars would consider certain asymmetric games as examples of these games as well. However, the most common payoffs for each of these games are symmetric.
The most commonly studied asymmetric games are games where the players do not have identical strategy sets. For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is possible, however, for a game to have identical strategies for both players, yet be asymmetric. For example, the game pictured to the right is asymmetric despite having identical strategy sets for both players.
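For two-player games with identical strategy sets, one way to test symmetry is to check whether the column player's payoff matrix is the transpose of the row player's. The sketch below applies this test to the asymmetric game shown earlier and to the Prisoner's Dilemma; the encoding and names are my own.

```python
def is_symmetric(A, B):
    """A two-player game with identical strategy sets is symmetric when the column
    player's payoff matrix B is the transpose of the row player's matrix A."""
    n = len(A)
    return all(B[j][i] == A[i][j] for i in range(n) for j in range(n))

# The asymmetric game above: rows/columns are strategies E and F.
A = [[1, 0], [0, 1]]   # Player 1's payoffs
B = [[2, 0], [0, 2]]   # Player 2's payoffs
print(is_symmetric(A, B))  # False: identical strategy sets, yet the game is asymmetric

# The Prisoner's Dilemma, by contrast, is symmetric.
A_pd = [[-1, -10], [0, -5]]
B_pd = [[-1, 0], [-10, -5]]
print(is_symmetric(A_pd, B_pd))  # True
```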
|   | A     | B     |
| A | –1, 1 | 3, –3 |
| B | 0, 0  | –2, 2 |
A zero-sum game
Zero-sum games are a special case of constant-sum games, in which choices by players can neither increase nor decrease the available resources. In zero-sum games the total benefit to all players in the game, for every combination of strategies, always adds to zero (more informally, a player benefits only at the equal expense of others). Poker exemplifies a zero-sum game (ignoring the possibility of the house's cut), because one wins exactly the amount one's opponents lose. Other zero-sum games include matching pennies and most classical board games including Go and chess.
Many games studied by game theorists (including the infamous prisoner's dilemma) are non-zero-sum games, because some outcomes have net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player does not necessarily correspond with a loss by another.
Constant-sum games correspond to activities like theft and gambling, but not to the fundamental economic situation in which there are potential gains from trade. It is possible to transform any game into a (possibly asymmetric) zero-sum game by adding an additional dummy player (often called "the board"), whose losses compensate the players' net winnings.
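The zero-sum property, and the dummy-player ("board") construction just described, can be checked mechanically. A small sketch using the zero-sum game from the table above and the Prisoner's Dilemma; the encoding and names are my own.

```python
def payoff_sums(payoffs):
    """Return the set of total payoffs across all strategy profiles."""
    return {sum(p) for p in payoffs.values()}

def is_zero_sum(payoffs):
    return payoff_sums(payoffs) == {0}

# The zero-sum game from the table above (strategies A and B for both players).
zs = {("A", "A"): (-1, 1), ("A", "B"): (3, -3), ("B", "A"): (0, 0), ("B", "B"): (-2, 2)}
print(is_zero_sum(zs))  # True

# Any game can be padded with a dummy "board" player whose payoff offsets the others.
def add_board_player(payoffs):
    return {profile: p + (-sum(p),) for profile, p in payoffs.items()}

pd = {("C", "C"): (-1, -1), ("C", "D"): (-10, 0), ("D", "C"): (0, -10), ("D", "D"): (-5, -5)}
print(is_zero_sum(add_board_player(pd)))  # True
```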
Simultaneous games are games where both players move simultaneously, or if they do not move simultaneously, the later players are unaware of the earlier players' actions (making them effectively simultaneous). Sequential games (or dynamic games) are games where later players have some knowledge about earlier actions. This need not be perfect information about every action of earlier players; it might be very little knowledge. For instance, a player may know that an earlier player did not perform one particular action, while he does not know which of the other available actions the first player actually performed.
The difference between simultaneous and sequential games is captured in the different representations discussed above. Often, normal form is used to represent simultaneous games, and extensive form is used to represent sequential ones. The transformation of extensive form to normal form is one-way, meaning that multiple extensive form games correspond to the same normal form. Consequently, notions of equilibrium for simultaneous games are insufficient for reasoning about sequential games; see subgame perfection.
A game of imperfect information (the dotted line represents ignorance on the part of player 2, formally called an information set)
An important subset of sequential games consists of games of perfect information. A game is one of perfect information if all players know the moves previously made by all other players. Thus, only sequential games can be games of perfect information, since in simultaneous games not every player knows the actions of the others. Most games studied in game theory are imperfect-information games, although there are some interesting examples of perfect-information games, including the ultimatum game and centipede game. Recreational games of perfect information include chess, go, and mancala. Many card games are games of imperfect information, for instance poker or contract bridge.
Perfect information is often confused with complete information, which is a similar concept. Complete information requires that every player know the strategies and payoffs available to the other players but not necessarily the actions taken. Games of incomplete information can be reduced, however, to games of imperfect information by introducing "moves by nature" (Leyton-Brown & Shoham 2008, p. 60).
Games in which the difficulty of finding an optimal strategy stems from the multiplicity of possible moves are called combinatorial games. Examples include chess and go. Games that involve imperfect or incomplete information may also have a strong combinatorial character, for instance backgammon. There is no unified theory addressing combinatorial elements in games. There are, however, mathematical tools that can solve particular problems and answer some general questions.[31]
Games of perfect information have been studied in combinatorial game theory, which has developed novel representations, e.g. surreal numbers, as well as combinatorial and algebraic (and sometimes non-constructive) proof methods to solve games of certain types, including some "loopy" games that may result in infinitely long sequences of moves. These methods address games with higher combinatorial complexity than those usually considered in traditional (or "economic") game theory.[32][33] A typical game that has been solved this way is hex. A related field of study, drawing from computational complexity theory, is game complexity, which is concerned with estimating the computational difficulty of finding optimal strategies.[34]
Research in artificial intelligence has addressed both perfect and imperfect (or incomplete) information games that have very complex combinatorial structures (like chess, go, or backgammon) for which no provable optimal strategies have been found. The practical solutions involve computational heuristics, like alpha-beta pruning or use of artificial neural networks trained by reinforcement learning, which make games more tractable in computing practice.[31][35]
Main article: Determinacy
Games, as studied by economists and real-world game players, are generally finished in finitely many moves. Pure mathematicians are not so constrained, and set theorists in particular study games that last for infinitely many moves, with the winner (or other payoff) not known until after all those moves are completed.
The focus of attention is usually not so much on what is the best way to play such a game, but simply on whether one or the other player has a winning strategy. (It can be proven, using the axiom of choice, that there are games—even with perfect information, and where the only outcomes are "win" or "lose"—for which neither player has a winning strategy.) The existence of such strategies, for cleverly designed games, has important consequences in descriptive set theory.
Much of game theory is concerned with finite, discrete games that have a finite number of players, moves, events, outcomes, etc. Many concepts can be extended, however. Continuous games allow players to choose a strategy from a continuous strategy set. For instance, Cournot competition is typically modeled with players' strategies being any non-negative quantities, including fractional quantities.
Differential games such as the continuous pursuit and evasion game are continuous games where the evolution of the players' state variables is governed by differential equations. The problem of finding an optimal strategy in a differential game is closely related to optimal control theory. In particular, there are two types of strategies: open-loop strategies are found using the Pontryagin Maximum Principle, while closed-loop strategies are found using Bellman's Dynamic Programming method.
A particular case of differential games is games with a random time horizon.[36] In such games, the terminal time is a random variable with a given probability distribution function. Therefore, the players maximize the mathematical expectation of the cost function. It was shown that the modified optimization problem can be reformulated as a discounted differential game over an infinite time interval.
Games with an arbitrary, but finite, number of players are often called n-person games (Luce & Raiffa 1957). Evolutionary game theory considers games involving a population of decision makers, where the frequency with which a particular decision is made can change over time in response to the decisions made by all individuals in the population. In biology, this is intended to model (biological) evolution, where genetically programmed organisms pass along some of their strategy programming to their offspring. In economics, the same theory is intended to capture population changes because people play the game many times within their lifetime, and consciously (and perhaps rationally) switch strategies (Webb 2007).
Individual decision problems with stochastic outcomes are sometimes considered "one-player games". These situations are not considered game theoretical by some authors.[by whom?] They may be modeled using similar tools within the related disciplines of decision theory, operations research, and areas of artificial intelligence, particularly AI planning (with uncertainty) and multi-agent system. Although these fields may have different motivators, the mathematics involved are substantially the same, e.g. using Markov decision processes (MDP).[citation needed]
Stochastic outcomes can also be modeled in terms of game theory by adding a randomly acting player who makes "chance moves", also known as "moves by nature" (Osborne & Rubinstein 1994). This player is not typically considered a third player in what is otherwise a two-player game, but merely serves to provide a roll of the dice where required by the game.
For some problems, different approaches to modeling stochastic outcomes may lead to different solutions. For example, the difference in approach between MDPs and the minimax solution is that the latter considers the worst case over a set of adversarial moves, rather than reasoning in expectation about these moves given a fixed probability distribution. The minimax approach may be advantageous where stochastic models of uncertainty are not available, but may also overweight extremely unlikely (but costly) events, dramatically swaying the strategy in such scenarios if it is assumed that an adversary can force such an event to happen.[37] (See black swan theory for more discussion on this kind of modeling issue, particularly as it relates to predicting and limiting losses in investment banking.)
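A toy illustration of this difference, with invented numbers: a strategy that carries a rare but catastrophic outcome can look best in expectation yet worst under minimax (worst-case) reasoning.

```python
# Two candidate strategies, each a list of (assumed probability, payoff) pairs.
# The numbers are invented purely to contrast expectation-based and worst-case reasoning.
strategies = {
    "risky": [(0.999, 10), (0.001, -1000)],   # high expected value, catastrophic tail
    "safe":  [(1.000, 5)],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

def worst_case(outcomes):
    return min(v for _, v in outcomes)

best_by_expectation = max(strategies, key=lambda s: expected_value(strategies[s]))
best_by_minimax = max(strategies, key=lambda s: worst_case(strategies[s]))
print(best_by_expectation)  # 'risky' (expected value 8.99 vs 5)
print(best_by_minimax)      # 'safe'  (worst case 5 vs -1000)
```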
General models that include all elements of stochastic outcomes, adversaries, and partial or noisy observability (of moves by other players) have also been studied. The "gold standard" is considered to be partially observable stochastic game (POSG), but few realistic problems are computationally feasible in POSG representation.[37]
Metagames are games the play of which is the development of the rules for another game, the target or subject game. Metagames seek to maximize the utility value of the rule set developed. The theory of metagames is related to mechanism design theory.
The term metagame analysis is also used to refer to a practical approach developed by Nigel Howard (Howard 1971) whereby a situation is framed as a strategic game in which stakeholders try to realise their objectives by means of the options available to them. Subsequent developments have led to the formulation of Confrontation Analysis.
- ^ Roger B. Myerson (1991). Game Theory: Analysis of Conflict, Harvard University Press, p. 1. Chapter-preview links, pp. vii-xi.
- ^ R. J. Aumann ([1987] 2008). "game theory," Introduction, The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
- ^ James Madison, Vices of the Political System of the United States, April, 1787. Link
- ^ Jack Rakove, "James Madison and the Constitution", History Now, Issue 13 September 2007. Link
- ^ J. v. Neumann (1928). "Zur Theorie der Gesellschaftsspiele," Mathematische Annalen, 100(1), pp. 295-320. English translation: "On the Theory of Games of Strategy," in A. W. Tucker and R. D. Luce, ed. (1959), Contributions to the Theory of Games, v. 4, pp. 13-42.
- ^ Leonard, Robert. Von Neumann, Morgenstern, and the Creation of Game Theory. Cambridge University Press, 2010
- ^ Although common knowledge was first discussed by the philosopher David Lewis in his dissertation (and later book) Convention in the late 1960s, it was not widely considered by economists until Robert Aumann's work in the 1970s.
- ^ a b Colin F. Camerer (2003). Behavioral Game Theory: Experiments in Strategic Interaction, pp. 5-7 (scroll to at 1.1 What Is Game Theory Good For?).
- ^ Ross, Don. "Game Theory". The Stanford Encyclopedia of Philosophy (Spring 2008 Edition). Edward N. Zalta (ed.). http://plato.stanford.edu/archives/spr2008/entries/game-theory/. Retrieved 2008-08-21.
- ^ Experimental work in game theory goes by many names, experimental economics, behavioral economics, and behavioural game theory are several. For a recent discussion, see Colin F. Camerer (2003). Behavioral Game Theory: Experiments in Strategic Interaction (description and Introduction, pp. 1–25).
- ^ • At JEL:C7 of the Journal of Economic Literature classification codes.
• R.J. Aumann (2008). "game theory," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
• Martin Shubik (1981). "Game Theory Models and Methods in Political Economy," in Kenneth Arrow and Michael Intriligator, ed., Handbook of Mathematical Economics, v. 1, pp. 285-330.
• Carl Shapiro (1989). "The Theory of Business Strategy," RAND Journal of Economics, 20(1), pp. 125-137.
- ^ • Leigh Tesfatsion (2006). "Agent-Based Computational Economics: A Constructive Approach to Economic Theory," ch. 16, Handbook of Computational Economics, v. 2, pp. 831-880.
• Joseph Y. Halpern (2008). "computer science and game theory," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
- ^ • From The New Palgrave Dictionary of Economics (2008), 2nd Edition:
Roger B. Myerson. "mechanism design." Abstract.
_____. "revelation principle." Abstract.
• Tuomas Sandholm. "computing in mechanism design." Abstract.
• Noam Nisan and Amir Ronen (2001). "Algorithmic Mechanism Design," Games and Economic Behavior, 35(1-2), pp. 166–196.
• Noam Nisan et al., ed. (2007). Algorithmic Game Theory, Cambridge University Press. Description.
- ^ R. Aumann and S. Hart, ed., 1994. Handbook of Game Theory with Economic Applications, v. 2, outline links, ch. 30: "Voting Procedures" & ch. 31: "Social Choice."
- ^ • Vernon L. Smith, 1992. "Game Theory and Experimental Economics: Beginnings and Early Influences," in E. R. Weintraub, ed., Towards a History of Game Theory, pp. 241- 282.
• _____, 2001. "Experimental Economics," International Encyclopedia of the Social & Behavioral Sciences, pp. 5100-5108. Abstract per sect. 1.1 & 2.1.
• Charles R. Plott and Vernon L. Smith, ed., 2008. Handbook of Experimental Economics Results, v. 1, Elsevier, Part 4, Games, ch. 45-66 preview links.
• Vincent P. Crawford (1997). "Theory and Experiment in the Analysis of Strategic Interaction," in Advances in Economics and Econometrics: Theory and Applications, pp. 206-242. Cambridge. Reprinted in Colin F. Camerer et al., ed. (2003). Advances in Behavioral Economics, Princeton. 1986-2003 papers. Description, contents, and preview., Princeton, ch. 12.
• Martin Shubik, 2002. "Game Theory and Experimental Gaming," in R. Aumann and S. Hart, ed., Handbook of Game Theory with Economic Applications, Elsevier, v. 3, pp. 2327-2351. Abstract.
- ^ From The New Palgrave Dictionary of Economics (2008), 2nd Edition:
• Faruk Gul. "behavioural economics and game theory." Abstract.
• Colin F. Camerer. "behavioral game theory." Abstract.
• _____ (1997). "Progress in Behavioral Game Theory," Journal of Economic Perspectives, 11(4), p. 172 [pp. 167-188].
• _____ (2003). Behavioral Game Theory, Princeton. Description, preview, and ch. 1 link.
• _____, George Loewenstein, and Matthew Rabin, ed. (2003). Advances in Behavioral Economics, Princeton. 1986-2003 papers. Description, contents, and preview.
• Drew Fudenberg (2006). "Advancing Beyond Advances in Behavioral Economics," Journal of Economic Literature, 44(3), pp. 694-711.
- ^ • Eric Rasmusen (2007). Games and Information, 4th ed. Description and chapter-preview links.
• David M. Kreps (1990). Game Theory and Economic Modelling. Description.
• R. Aumann and S. Hart, ed. (1992, 2002). Handbook of Game Theory with Economic Applications v. 1, scroll to outline links, ch. 3-6 and v. 3, ch. 43.
- ^ • Jean Tirole (1988). The Theory of Industrial Organization, MIT Press. Description and chapter-preview links, pp. vii-ix, "General Organization," pp. 5-6, and "Non-Cooperative Game Theory: A User's Guide Manual," ch. 11, pp. 423-59.
• Kyle Bagwell and Asher Wolinsky (2002). "Game theory and Industrial Organization," ch. 49, Handbook of Game Theory with Economic Applications, v. 3, pp. 1851-1895.
• Martin Shubik (1959). Strategy and Market Structure: Competition, Oligopoly, and the Theory of Games, Wiley. Description and review extract.
• _____ with Richard Levitan (1980). Market Structure and Behavior, Harvard University Press. Review extract.
- ^ • Martin Shubik (1981). "Game Theory Models and Methods in Political Economy," in Handbook of Mathematical Economics, v. 1, pp. 285-330.
•_____ (1987). A Game-Theoretic Approach to Political Economy. MIT Press. Description.
- ^ • Martin Shubik (1978). "Game Theory: Economic Applications," in W. Kruskal and J.M. Tanur, ed., International Encyclopedia of Statistics, v. 2, pp. 372–78.
• Robert Aumann and Sergiu Hart, ed. Handbook of Game Theory with Economic Applications (scrollable to chapter-outline or abstract links):
- 1992. v. 1; 1994. v. 2; 2002. v. 3.
- ^ Evolutionary Game Theory (Stanford Encyclopedia of Philosophy)
- ^ a b Biological Altruism (Stanford Encyclopedia of Philosophy)
- ^ Noam Nisan et al., ed. (2007). Algorithmic Game Theory, Cambridge University Press. Description.
- ^ Noam Nisan and Amir Ronen (2001). "Algorithmic Mechanism Design," Games and Economic Behavior, 35(1-2), pp. 166–196.
- ^ • Joseph Y. Halpern (2008). "computer science and game theory," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
• Yoav Shoham (2008). "Computer Science and Game Theory," Communications of the ACM, 51(8), pp. 75-79.
• Amy Greenwald and Michael L. Littman (2007). "Introduction to the Special Issue on Learning and Computational Game Theory," Machine Learning, 67(1-2), pp. 3-6. Preview and issue-article links.
- ^ E. Ullmann Margalit, The Emergence of Norms, Oxford University Press, 1977. C. Bicchieri, The Grammar of Society: the Nature and Dynamics of Social Norms, Cambridge University Press, 2006.
- ^ "Self-Refuting Theories of Strategic Interaction: A Paradox of Common Knowledge ", Erkenntnis 30, 1989: 69-85. See also Rationality and Coordination, Cambridge University Press, 1993.
- ^ The Dynamics of Rational Deliberation, Harvard University Press, 1990.
- ^ "Knowledge, Belief, and Counterfactual Reasoning in Games." In Cristina Bicchieri, Richard Jeffrey, and Brian Skyrms, eds., The Logic of Strategy. New York: Oxford University Press, 1999.
- ^ For a more detailed discussion of the use of Game Theory in ethics see the Stanford Encyclopedia of Philosophy's entry game theory and ethics.
- ^ a b Jörg Bewersdorff (2005). Luck, logic, and white lies: the mathematics of games. A K Peters, Ltd.. pp. ix-xii and chapter 31. ISBN 978-1-56881-210-6.
- ^ Albert, Michael H.; Nowakowski, Richard J.; Wolfe, David (2007). Lessons in Play: An Introduction to Combinatorial Game Theory. A K Peters Ltd. pp. 3–4. ISBN 978-1-56881-277-9.
- ^ Beck, József (2008). Combinatorial games: tic-tac-toe theory. Cambridge University Press. pp. 1–3. ISBN 978-0-521-46100-9.
- ^ Robert A. Hearn; Erik D. Demaine (2009). Games, Puzzles, and Computation. A K Peters, Ltd.. ISBN 978-1-56881-322-6.
- ^ M. Tim Jones (2008). Artificial Intelligence: A Systems Approach. Jones & Bartlett Learning. pp. 106–118. ISBN 978-0-7637-7337-3.
- ^ Petrosjan, L.A. and Murzov, N.V. (1966). Game-theoretic problems of mechanics. Litovsk. Mat. Sb. 6, 423–433 (in Russian).
- ^ a b Hugh Brendan McMahan (2006), Robust Planning in Domains with Stochastic Outcomes, Adversaries, and Partial Observability, CMU-CS-06-166, pp. 3-4
- "game theory" by Robert J. Aumann. Abstract.
- "game theory in economics, origins of," by Robert Leonard. Abstract.
- "behavioural economics and game theory" by Faruk Gul. Abstract.
- Camerer, Colin (2003), Behavioral Game Theory: Experiments in Strategic Interaction, Russell Sage Foundation, ISBN 978-0-691-09039-9 Description and Introduction, pp. 1–25.
- Dutta, Prajit K. (1999), Strategies and games: theory and practice, MIT Press, ISBN 978-0-262-04169-0 . Suitable for undergraduate and business students.
- Fernandez, L F.; Bierman, H S. (1998), Game theory with economic applications, Addison-Wesley, ISBN 978-0-201-84758-1 . Suitable for upper-level undergraduates.
- Fudenberg, Drew; Tirole, Jean (1991), Game theory, MIT Press, ISBN 978-0-262-06141-4 . Acclaimed reference text. Description.
- Gibbons, Robert D. (1992), Game theory for applied economists, Princeton University Press, ISBN 978-0-691-00395-5 . Suitable for advanced undergraduates.
- Published in Europe as Robert Gibbons (2001), A Primer in Game Theory, London: Harvester Wheatsheaf, ISBN 978-0-7450-1159-2 .
- Gintis, Herbert (2000), Game theory evolving: a problem-centered introduction to modeling strategic behavior, Princeton University Press, ISBN 978-0-691-00943-8
- Green, Jerry R.; Mas-Colell, Andreu; Whinston, Michael D. (1995), Microeconomic theory, Oxford University Press, ISBN 978-0-19-507340-9 . Presents game theory in formal way suitable for graduate level.
- Hansen, Pelle G.; Hendricks, Vincent F., eds. (2007), Game Theory: 5 Questions, New York, London: Automatic Press / VIP, ISBN 978-87-991013-4-4 . Snippets from interviews.
- Howard, Nigel (1971), Paradoxes of Rationality: Games, Metagames, and Political Behavior, Cambridge, Massachusetts: The MIT Press, ISBN 978-0-262-58237-7
- Isaacs, Rufus (1999), Differential Games: A Mathematical Theory With Applications to Warfare and Pursuit, Control and Optimization, New York: Dover Publications, ISBN 978-0-486-40682-4
- Leyton-Brown, Kevin; Shoham, Yoav (2008), Essentials of Game Theory: A Concise, Multidisciplinary Introduction, San Rafael, CA: Morgan & Claypool Publishers, ISBN 978-1-59829-593-1, http://www.gtessentials.org . An 88-page mathematical introduction; free online at many universities.
- Miller, James H. (2003), Game theory at work: how to use game theory to outthink and outmaneuver your competition, New York: McGraw-Hill, ISBN 978-0-07-140020-6 . Suitable for a general audience.
- Myerson, Roger B. (1991), Game theory: analysis of conflict, Harvard University Press, ISBN 978-0-674-34116-6
- Osborne, Martin J. (2004), An introduction to game theory, Oxford University Press, ISBN 978-0-19-512895-6 . Undergraduate textbook.
- Papayoanou, Paul (2010), Game Theory for Business, Probabilistic Publishing, ISBN 978-0-9647938-7-3 . Primer for business men and women.
- Petrosyan, Leon; Zenkevich, Nikolay (1996), Game Theory (Series on Optimization, 3), World Scientific Publishers, ISBN 978-981-02-2396-0
- Osborne, Martin J.; Rubinstein, Ariel (1994), A course in game theory, MIT Press, ISBN 978-0-262-65040-3 . A modern introduction at the graduate level.
- Poundstone, William (1992), Prisoner's Dilemma: John von Neumann, Game Theory and the Puzzle of the Bomb, Anchor, ISBN 978-0-385-41580-4 . A general history of game theory and game theoreticians.
- Rasmusen, Eric (2006), Games and Information: An Introduction to Game Theory (4th ed.), Wiley-Blackwell, ISBN 978-1-4051-3666-2, http://www.rasmusen.org/GI/index.html
- Shoham, Yoav; Leyton-Brown, Kevin (2009), Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, New York: Cambridge University Press, ISBN 978-0-521-89943-7, http://www.masfoundations.org . A comprehensive reference from a computational perspective; downloadable free online.
- Williams, John Davis (1954), The Compleat Strategyst: Being a Primer on the Theory of Games of Strategy (PDF), Santa Monica: RAND Corp., ISBN 978-0-8330-4222-4, http://www.rand.org/pubs/commercial_books/2007/RAND_CB113-1.pdf . Praised primer and popular introduction for everybody, never out of print.
- Roger McCain's Game Theory: A Nontechnical Introduction to the Analysis of Strategy (Revised Edition)
- Christopher Griffin (2010) Game Theory: Penn State Math 486 Lecture Notes, pp. 169, CC-BY-NC-SA license, suitable introduction for undergraduates
- Webb, James N. (2007), Game theory: decisions, interaction and evolution, Springer undergraduate mathematics series, Springer, ISBN 1-84628-423-6 Consistent treatment of game types usually claimed by different applied fields, e.g. Markov decision processes.
- Joseph E. Harrington (2008) Games, strategies, and decision making, Worth, ISBN 0-7167-6630-2. Textbook suitable for undergraduates in applied fields; numerous examples, fewer formalisms in concept presentation.
- Aumann, R.J. and Shapley, L.S. (1974), Values of Non-Atomic Games, Princeton University Press
- Cournot, A. Augustin (1838), "Recherches sur les principes mathématiques de la théorie des richesses", Libraire des sciences politiques et sociales (Paris: M. Rivière & C.ie)
- Edgeworth, Francis Y. (1881), Mathematical Psychics, London: Kegan Paul
- Farquharson, Robin (1969), Theory of Voting, Blackwell (Yale U.P. in the U.S.), ISBN 0-631-12460-8
- Luce, R. Duncan; Raiffa, Howard (1957), Games and decisions: introduction and critical survey, New York: Wiley
- Maynard Smith, John (1982), Evolution and the theory of games, Cambridge University Press, ISBN 978-0-521-28884-2
- Maynard Smith, John; Price, George R. (1973), "The logic of animal conflict", Nature 246 (5427): 15–18, Bibcode 1973Natur.246...15S, DOI:10.1038/246015a0
- Nash, John (1950), "Equilibrium points in n-person games", Proceedings of the National Academy of Sciences of the United States of America 36 (1): 48–49, DOI:10.1073/pnas.36.1.48, PMC 1063129, PMID 16588946
- Shapley, L. S. (1953), A Value for n-person Games, In: Contributions to the Theory of Games volume II, H. W. Kuhn and A. W. Tucker (eds.)
- Shapley, L. S. (1953), Stochastic Games, Proceedings of National Academy of Science Vol. 39, pp. 1095–1100.
- von Neumann, John (1928), "Zur Theorie der Gesellschaftsspiele", Mathematische Annalen 100 (1): 295–320. English translation: "On the Theory of Games of Strategy," in A. W. Tucker and R. D. Luce, ed. (1959), Contributions to the Theory of Games, v. 4, pp. 13-42. Princeton University Press.
- von Neumann, John; Morgenstern, Oskar (1944), Theory of games and economic behavior, Princeton University Press
- Zermelo, Ernst (1913), "Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels", Proceedings of the Fifth International Congress of Mathematicians 2: 501–4
- Ben David, S.; Borodin, Allan; Karp, Richard; Tardos, G.; Wigderson, A. (1994), "On the Power of Randomization in On-line Algorithms" (PDF), Algorithmica 11 (1): 2–14, DOI:10.1007/BF01294260, http://www.math.ias.edu/~avi/PUBLICATIONS/MYPAPERS/BORODIN/paper.pdf
- Bicchieri, Cristina (1993, 2nd. edition, 1997), Rationality and Coordination, Cambridge University Press, ISBN 0-521-57444-7
- Downs, Anthony (1957), An Economic theory of Democracy, New York: Harper
- Gauthier, David (1986), Morals by agreement, Oxford University Press, ISBN 978-0-19-824992-4
- Allan Gibbard, "Manipulation of voting schemes: a general result", Econometrica, Vol. 41, No. 4 (1973), pp. 587–601.
- Grim, Patrick; Kokalis, Trina; Alai-Tafti, Ali; Kilb, Nicholas; St Denis, Paul (2004), "Making meaning happen", Journal of Experimental & Theoretical Artificial Intelligence 16 (4): 209–243, DOI:10.1080/09528130412331294715
- Harper, David; Maynard Smith, John (2003), Animal signals, Oxford University Press, ISBN 978-0-19-852685-8
- Harsanyi, John C. (1974), "An equilibrium point interpretation of stable sets", Management Science 20 (11): 1472–1495, DOI:10.1287/mnsc.20.11.1472
- Levy, Gilat; Razin, Ronny (2003), "It Takes Two: An Explanation of the Democratic Peace", Working Paper, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=433844
- Lewis, David (1969), Convention: A Philosophical Study , ISBN 978-0-631-23257-5 (2002 edition)
- McDonald, John (1950 - 1996), Strategy in Poker, Business & War, W. W. Norton, ISBN 0-393-31457-X . A layman's introduction.
- Quine, W.v.O (1967), "Truth by Convention", Philosophica Essays for A.N. Whitehead, Russel and Russel Publishers, ISBN 978-0-8462-0970-6
- Quine, W.v.O (1960), "Carnap and Logical Truth", Synthese 12 (4): 350–374, DOI:10.1007/BF00485423
- Mark A. Satterthwaite, "Strategy-proofness and Arrow's Conditions: Existence and Correspondence Theorems for Voting Procedures and Social Welfare Functions", Journal of Economic Theory 10 (April 1975), 187–217.
- Siegfried, Tom (2006), A Beautiful Math, Joseph Henry Press, ISBN 0-309-10192-1
- Skyrms, Brian (1990), The Dynamics of Rational Deliberation, Harvard University Press, ISBN 0-674-21885-X
- Skyrms, Brian (1996), Evolution of the social contract, Cambridge University Press, ISBN 978-0-521-55583-8
- Skyrms, Brian (2004), The stag hunt and the evolution of social structure, Cambridge University Press, ISBN 978-0-521-53392-8
- Sober, Elliott; Wilson, David Sloan (1998), Unto others: the evolution and psychology of unselfish behavior, Harvard University Press, ISBN 978-0-674-93047-6
- Thrall, Robert M.; Lucas, William F. (1963), "n-person games in partition function form", Naval Research Logistics Quarterly 10 (4): 281–298, DOI:10.1002/nav.3800100126
- Paul Walker: History of Game Theory Page.
- David Levine: Game Theory. Papers, Lecture Notes and much more stuff.
- Alvin Roth: Game Theory and Experimental Economics page - Comprehensive list of links to game theory information on the Web
- Adam Kalai: Game Theory and Computer Science - Lecture notes on Game Theory and Computer Science
- Mike Shor: Game Theory .net - Lecture notes, interactive illustrations and other information.
- Jim Ratliff's Graduate Course in Game Theory (lecture notes).
- Don Ross: Review Of Game Theory in the Stanford Encyclopedia of Philosophy.
- Bruno Verbeek and Christopher Morris: Game Theory and Ethics
- Elmer G. Wiens: Game Theory - Introduction, worked examples, play online two-person zero-sum games.
- Marek M. Kaminski: Game Theory and Politics - syllabuses and lecture notes for game theory and political science.
- Web sites on game theory and social interactions
- Kesten Green's Conflict Forecasting - See Papers for evidence on the accuracy of forecasts from game theory and other methods.
- McKelvey, Richard D., McLennan, Andrew M., and Turocy, Theodore L. (2007) Gambit: Software Tools for Game Theory.
- Benjamin Polak: Open Course on Game Theory at Yale videos of the course
- Benjamin Moritz, Bernhard Könsgen, Danny Bures, Ronni Wiersch, (2007) Spieltheorie-Software.de: An application for Game Theory implemented in JAVA.